Patient Safety


AURA: Development and Validation of an Augmented Unplanned Removal Alert System using Synthetic ICU Videos

Seo, Junhyuk, Moon, Hyeyoon, Jung, Kyu-Hwan, Oh, Namkee, Kim, Taerim

arXiv.org Artificial Intelligence

Unplanned extubation (UE)--the unintended removal of an airway tube--remains a critical patient safety concern in intensive care units (ICUs), often leading to severe complications or death. Real-time UE detection has been limited, largely due to the ethical and privacy challenges of obtaining annotated ICU video data. We propose Augmented Unplanned Removal Alert (AURA), a vision-based risk detection system developed and validated entirely on a fully synthetic video dataset. By leveraging text-to-video diffusion, we generated diverse and clinically realistic ICU scenarios capturing a range of patient behaviors and care contexts. The system applies pose estimation to identify two high-risk movement patterns: collision, defined as hand entry into spatial zones near airway tubes, and agitation, quantified by the velocity of tracked anatomical keypoints. Expert assessments confirmed the realism of the synthetic data, and performance evaluations showed high accuracy for collision detection and moderate performance for agitation recognition. This work demonstrates a novel pathway for developing privacy-preserving, reproducible patient safety monitoring systems with potential for deployment in intensive care settings.
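The two movement signals described in the abstract can be sketched in a few lines (a minimal illustration only; the circular zone geometry, keypoint array layout, and helper names are assumptions for exposition, not the authors' implementation):

```python
import numpy as np

def collision(hand_xy, zone_center, radius):
    """Flag a collision event: a tracked hand keypoint enters a
    spatial zone near the airway tube (modeled here as a circle)."""
    d = np.linalg.norm(np.asarray(hand_xy, float) - np.asarray(zone_center, float))
    return bool(d <= radius)

def agitation_score(keypoints, fps):
    """Mean speed (pixels/second) of tracked anatomical keypoints.

    keypoints: array of shape (frames, joints, 2) from a pose estimator.
    """
    disp = np.diff(np.asarray(keypoints, float), axis=0)  # per-frame displacement
    speeds = np.linalg.norm(disp, axis=-1) * fps          # pixels per second
    return float(speeds.mean())
```

An alert policy could then threshold these two per-window signals to raise a UE risk flag.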


ChatFDA: Medical Records Risk Assessment

Tran, M, Sun, C

arXiv.org Artificial Intelligence

In healthcare, the emphasis on patient safety and the minimization of medical errors cannot be overstated. Despite concerted efforts, many healthcare systems, especially in low-resource regions, still struggle to prevent these errors effectively. This study explores a pioneering application aimed at addressing this challenge by assisting caregivers in gauging potential risks derived from medical notes. The application leverages data from openFDA, delivering real-time, actionable insights regarding prescriptions. Preliminary analyses conducted on the MIMIC-III dataset support a proof of concept, indicating a reduction in medical errors and an improvement in patient safety. This tool holds promise for substantially enhancing healthcare outcomes in settings with limited resources. To bolster reproducibility and foster further research, the codebase underpinning our methodology is available at https://github.com/autonlab/2023.hackAuton/tree/main/prescription_checker. This is a submission for the 30th CMU HackAuton.
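The openFDA lookup underlying such a prescription checker can be sketched as a query against the public drug-label endpoint (the endpoint and `search`/`limit` parameters are openFDA's documented ones, but the helper names and the choice of the `warnings` field are illustrative, not taken from the authors' code):

```python
import json
import urllib.parse
import urllib.request

OPENFDA_LABEL = "https://api.fda.gov/drug/label.json"

def build_label_query(brand_name: str, limit: int = 1) -> str:
    """Build an openFDA drug-label query URL for a prescription brand name."""
    params = {
        "search": f'openfda.brand_name:"{brand_name}"',
        "limit": str(limit),
    }
    return f"{OPENFDA_LABEL}?{urllib.parse.urlencode(params)}"

def fetch_warnings(brand_name: str) -> list[str]:
    """Fetch the label's warnings section, if present (makes a network call)."""
    with urllib.request.urlopen(build_label_query(brand_name)) as resp:
        results = json.load(resp).get("results", [])
    return results[0].get("warnings", []) if results else []
```

A caregiver-facing tool would then surface the returned warnings alongside the medical note being reviewed.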


Assistive Chatbots for healthcare: a succinct review

Bhattacharya, Basabdatta Sen, Pissurlenkar, Vibhav Sinai

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) for supporting healthcare services has never been more necessary than during the recent global pandemic. Here, we review the state-of-the-art in AI-enabled Chatbots in healthcare proposed during the last 10 years (2013-2023). The focus on AI-enabled technology is because of its potential for enhancing the quality of human-machine interaction via Chatbots, reducing dependence on human-human interaction and saving man-hours. Our review indicates that a handful of (commercial) Chatbots are being used for patient support, while others (non-commercial) are in clinical trial phases. However, there is a lack of trust in this technology regarding patient safety and data protection, as well as a lack of wider awareness of its benefits among healthcare workers and professionals. Patients have also expressed dissatisfaction with the Natural Language Processing (NLP) skills of Chatbots in comparison to humans. Notwithstanding the recent introduction of ChatGPT, which has raised the bar for NLP technology, this Chatbot cannot be trusted with patient safety and medical ethics without thorough and rigorous checks before serving in the `narrow' domain of assistive healthcare. Our review suggests that to enable deployment and integration of AI-enabled Chatbots in public health services, the need of the hour is: to build technology that is simple and safe to use; and to build confidence in the technology among (a) the medical community, through focussed training and development, and (b) patients and the wider community, through outreach.


Adopting AI systems too quickly without full testing could lead to 'errors by health care workers': WHO

FOX News

Dr. Anthony Mazzarelli, the CEO of Cooper University Health Care in New Jersey and an ER physician as well, spoke with Fox News Digital about how Nuance's AI tool is helping physicians focus more on patients and less on paperwork. As the artificial intelligence train barrels on with no signs of slowing down -- some studies have even predicted that AI will grow by more than 37% per year between now and 2030 -- the World Health Organization (WHO) has issued an advisory calling for "safe and ethical AI for health." The agency recommended caution when using "AI-generated large language model tools (LLMs) to protect and promote human well-being, human safety and autonomy, and preserve public health." ChatGPT, Bard and BERT are currently some of the most popular LLMs. In some cases, the chatbots have been shown to rival real physicians in terms of the quality of their responses to medical questions.


How Machine Learning is Changing Prescription Delivery

#artificialintelligence

As the healthcare industry continues to evolve, pharmacists play an essential role in delivering patient care. With the increasing demand for pharmaceutical services, pharmacists are looking for ways to improve efficiency while maintaining the highest levels of patient safety. Machine learning is an emerging technology that can help pharmacists deliver prescriptions more effectively and efficiently. In this article, we will explore how machine learning can revolutionize the pharmacy industry. Machine learning is a branch of artificial intelligence that involves developing algorithms that can learn from data and make predictions or decisions based on that data.


Does Ethical AI Development Rely On The "Algorithmically" Underserved? CHAI's Mission

#artificialintelligence

For AI to flourish in healthcare, the industry must focus on the "algorithmically underserved," said John D. Halamka, M.D., M.S., president of Mayo Clinic Platform, at the HLTH 2022 conference this month in Las Vegas. Giving visibility to the algorithmically underserved -- individuals who do not generate enough data/are not well represented enough in health data sets for AI to make a determination -- is just one requirement to overcome the prospect of AI bias in healthcare. And identifying and fixing sources of AI bias must be a focus area for an industry that's striving for ethical and equitable AI development, shared Halamka. For example, what if there was a national registry that hosted all the metadata needed to power the responsible development of algorithms for use in healthcare? Building this kind of standardization into the relatively black box nature of AI development is among the priorities of The Coalition for Health AI (CHAI), which launched earlier this year.


Industry Spotlight: Mark Fewster, chief product officer with Radar Healthcare

#artificialintelligence

The following is sponsored content. Achieving LFPSE (Learning from Patient Safety Events) compliance is more than just meeting targets – the real driver is transforming patient safety by enabling continuous improvement, says Mark Fewster, chief product officer with Radar Healthcare. The way that health care workers report on patient safety events is changing – and the deadline for making it happen is looming. By March 2023, healthcare organisations in England should have transitioned from the current NRLS (National Reporting and Learning System) and be LFPSE (Learning from Patient Safety Events) compliant. This is more than a change in initials – the new system aims to transform how patient safety events are recorded across the country.


What are the upcoming policies that will shape AI – and are policymakers up to the task?

#artificialintelligence

As vice president and director of governance studies at the Brookings Institution, and a senior fellow at its Center for Technology Innovation, Darrell M. West spends a lot of time thinking about the intersection of policy and emerging tech. In his recent book, Turning Point: Policymaking in the Era of Artificial Intelligence, co-authored with Brookings President John R. Allen, West looks at AI use cases – "from self-driving cars to e-commerce algorithms that seem to know what you want to buy before you do" – and assesses where they're headed and how they will be shaped by policy decisions made today. The key challenge – not least in healthcare, where patient safety is paramount – is to devise regulatory guardrails that maximize the benefits of AI and machine learning and minimize their potentially dangerous downsides. In the book, West and Allen offer a series of recommendations – bolstering governmental oversight, creating new specialized advisory boards at federal agencies, third-party auditing to sniff out algorithmic bias and more. At the upcoming HIMSS Machine Learning & AI for Healthcare event, West will offer a presentation titled "The Latest Regulatory Developments Impacting Machine Learning and AI in Healthcare," where he'll explore potential new policy shifts around clinical uses of artificial intelligence: algorithmic bias, remote patient monitoring, patient safety, fitness trackers and more.


UK seeks overhaul of AI, software as a medical device regs

#artificialintelligence

With the withdrawal of the U.K. from the European Union, the MHRA, as part of its new post-Brexit regulatory freedoms, is moving to update the country's regulations for software and AI as a medical device without the burden of accommodating the regulatory approaches of EU members. "These measures demonstrate the U.K.'s commitment, following our exit from the European Union, to drive innovation in healthcare and improve patient outcomes," states the MHRA's announcement. "Regulatory measures will be updated to further protect patient safety and take account of these technological advances." AI and SaMD technologies have the potential to improve the diagnosis and treatment of a wide variety of diseases. In the U.S., meanwhile, the FDA has yet to finalize a regulatory framework for machine-learning-based software as a medical device; the agency is considering a total product lifecycle-based regulatory framework for adaptive or continuously learning algorithms.


UK MHRA: Transforming the regulation of software and artificial intelligence as a medical device

#artificialintelligence

These measures demonstrate the UK's commitment, following our exit from the European Union, to drive innovation in healthcare and improve patient outcomes. The exciting and fast-developing field of software and artificial intelligence (AI) as a medical device has an increasingly prominent role within health systems. Applications of AI regulated as medical devices can range from screening to diagnosis, treatment, and the management of chronic conditions. Regulatory measures will be updated to further protect patient safety and take account of these technological advances. The MHRA has developed an extensive work programme to inform regulatory changes, including key reforms across the software-as-a-medical-device lifecycle, from qualification to classification, and to the requirements that apply pre- and post-market.